Artificial Creativity

Artonomo.us - beginnings of a completely autonomous painting robot.

Artonomo.us is my latest painting robot project.

In this project, one of my robots

1: Selects its own imagery to paint.

2: Applies its own style with Deep Learning Neural Networks.

3: Then paints with Feedback Loops.

An animated GIF of the process is below…

fullanim_withcombination.gif

Artonomo.us is currently working on a series of sixteen portraits in the style above. The portraits were selected from user submitted photos. It will be painting these portraits over the next couple of weeks. When done, it will select a new style and sixteen new user submitted portraits. If you would like to see more of these portraits, or submit yours for consideration, visit artonomo.us.

Below are the images selected for this round of portraits. We will publish the complete set when Artonomo.us is finished with it…

arto.jpg


Pindar

NY Art Critic Jerry Saltz thinks my Painting Robots have Good Taste

New York Magazine's Art Critic Jerry Saltz recently reviewed several AI generated pieces of art including my own. The piece begins at 19:45 in the above HBO Vice video. I was prepared for the worst, but pleased when he looked at this Portrait of Elle Reeve and commented that my robots have "good taste" and that "It doesn't look like a computer made it." But he then paused and concluded, "That doesn't make it any good." Even so, I loved the review, since a couple of years ago the art world did not even consider what I was doing to be art. At least now it is considered bad art. That's progress!

jerry_saltz_vice_003.jpg

All the AI generated art got roasted, but at least I got what I considered to be the best review among my colleagues, whose work included some of the world's finest AI generated artwork.

Pindar

 

New Art Algorithm Discovered by autonymo.us

Have started a new project called autonymo.us where I have let one of my painting robots go off on its own. It experiments and tries new things, sometimes completely abstract, other times using a simple algorithm to complete a painting. Sometimes it uses different combinations of the couple dozen algorithms I have written for it.

Most of the results are horrendous. But sometimes it comes up with something really beautiful. I am putting the beautiful ones up at the website autonymo.us, but also thought I would share new algorithm discoveries here.

So the second algorithm we discovered looks really good on smiling faces. And it is really simple.

Step 1: Detect that the subject has a big smile. Seriously, because this doesn’t look good otherwise.

Step 2: Isolate background and separate it from the faces.

Step 3: Quickly cover background with a mixture of teal & white paint.

Step 4: Use K-Means Clustering to organize pixels with respect to r, g, b values and x, y coordinates.

Step 5: Paint light parts of painting in varying shades of pyrole orange.

Step 6: Paint dark parts of painting in crimsons, grays, and black.
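
For anyone curious how these steps might look in code, here is a minimal sketch assuming OpenCV's stock Haar cascades for the smile check and scikit-learn for the clustering. The file name, cluster count, cascade parameters, and brightness threshold are all illustrative guesses of mine, not the robot's actual settings.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

img = cv2.imread("portrait.jpg")                  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: only proceed if the subject has a big smile
face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")
big_smile = any(
    len(smile_cc.detectMultiScale(gray[y:y + h, x:x + w], 1.7, 22)) > 0
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5)
)

if big_smile:
    # Step 4: cluster every pixel on (b, g, r, x, y) so clusters stay
    # spatially coherent as well as similar in color
    rows, cols = img.shape[:2]
    ys, xs = np.mgrid[0:rows, 0:cols]
    feats = np.column_stack([img.reshape(-1, 3),
                             xs.reshape(-1, 1),
                             ys.reshape(-1, 1)]).astype(np.float32)
    labels = KMeans(n_clusters=8, n_init=4).fit_predict(feats).reshape(rows, cols)

    # Steps 5-6: split clusters into light and dark by mean brightness;
    # light regions get pyrrole-orange shades, dark ones crimson/gray/black
    for k in range(8):
        region = labels == k
        shade = "pyrrole orange" if gray[region].mean() > 127 else "crimson/gray/black"
        print(f"cluster {k}: paint in {shade}")
```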

A simple and fun algorithm to paint smiling portraits with. Here are a couple…

autonymo_us_003b.jpg
autonymo_us_004.jpg

Augmenting the Creativity of A Child with A.I.

The details behind my most recent painting are complex, but the impact is simple.

gallery05_image.jpg

My robots had a photoshoot with my youngest child, then painted a portrait with her in the style of one of her paintings. Straightforward A.I. by today’s standards, but who really cares how simple the process behind something is, as long as the final result is emotionally relevant?

With this painting there were giggles as she painted alongside my robot, and amazement as she watched the final piece develop over time, so it was a success.

The following image shows the inputs: my robot’s favorite photo from the shoot (top left) and a painting made by her (top middle). The A.I. then used a CNN style transfer to reimagine her face in the style of her painting (top right). As the robot worked on painting this image with feedback loops, she painted along on a touchscreen, giving the robot direction on how to create the strokes (bottom left). The robot then used her collaborative input, a variety of generative A.I., Deep Learning, and Feedback Loops to finish the painting one brushstroke at a time (bottom right).

ai_portrait.jpg

In essence, the robot was using a brush to remove the difference between an image that was dynamically changing in its memory and what it saw emerging on the canvas. A timelapse of the painting as it was being created is below…

corinne_anim_fast.gif
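
A toy version of that difference-reduction idea might look like the following, assuming the in-memory target and a camera capture of the canvas are same-sized numpy arrays. The patch-scanning stroke picker is a placeholder of my own, not the robot's actual planner.

```python
import numpy as np

def next_stroke_region(target: np.ndarray, canvas: np.ndarray, patch: int = 32):
    """Return the top-left corner of the patch where the canvas photo
    differs most from the target image held in memory."""
    error = np.abs(target.astype(int) - canvas.astype(int)).sum(axis=2)
    best, best_xy = -1, (0, 0)
    for y in range(0, error.shape[0] - patch, patch):
        for x in range(0, error.shape[1] - patch, patch):
            e = error[y:y + patch, x:x + patch].sum()
            if e > best:
                best, best_xy = e, (x, y)
    return best_xy  # paint here next, matching the target's local color
```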

Pindar

Robot Art 2018 Award

For each of the past three years, internet entrepreneur Andrew Conru has sponsored RobotArt.org, a $100,000 international competition to build artistic robots. His challenge was to create beautiful paintings with machines, and the only steadfast rule was that the robot had to use a brush... no inkjet printers allowed.

I have participated in each, and this year I was fortunate enough to be awarded the top prize. Here are three of the paintings I submitted. The first two pictured are AI generated portraits made with varying degrees of abstraction. The third is an AI generated study of a Cezanne masterpiece.

robotart_announcement.jpg

Second Place went to Evolutionary AI Visionary Hod Lipson, and third went to a team from Kasetsart University in Thailand.

Since the contest began, more than 600 robotic paintings have been submitted for consideration by artists, teams, and universities around the world, including the notable robots e-David, A Roboto, TAIDA, and NORAA. The forms of the machines themselves have included robot arms, xy-tables, drones, and even a dancing snake. The works were half performance and half end product. How and why the robots were painting was as important a part of the judging as what the paintings looked like. Videos of each robot in action were a popular attraction of the contest.

2016 Winner TAIDA painting Einstein

The style of the paintings was as varied as the robots. These five paintings from the 2018 competition show some of the variety. From left to right, these are the work of Canadian Artist Joanne Hastie, Japanese A Roboto, Columbia's Hod Lipson, Californian Team HHS, and Australian Artist Robert Todonai.

ra_final_1.jpg

Like all things technology related recently, the contest also saw an increase in the amount of AI used by the robots each year.

edavid.jpeg

In the first year, only a couple of teams used AI, including e-David and myself. In the self portrait on the left, e-David used feedback loops to watch what it was painting and make adjustments accordingly, refining the painting by reducing error one brush stroke at a time.

While AI was unusual when the contest began, it has since become one of the most important tools for the robots. Many of the top entries, including mine, Hod Lipson's, and A Roboto's, used deep learning to create increasingly autonomous generative art systems. For some of the work it became unclear whether the system was simply being generative, or whether the robots were in fact achieving creativity.

This has been one of my favorite projects to work on in recent years. Couldn't be happier to have won it this year. If you are interested in seeing higher resolution images of my submissions, as well as three that are for sale, you can see more in my CryptoArt gallery at superrare.co.

Emerging Faces - A Collaboration with 3D (aka Robert Del Naja of Massive Attack)

Have been working on and off for the past several months with Bristol-based artist 3D (aka Robert Del Naja, the founder of Massive Attack). We have been experimenting with applying GANs, CNNs, and many of my own artificial intelligence algorithms to his artwork. I have long been working at encapsulating my own artistic process in code. 3D and I are now exploring whether we can capture parts of his artistic process.

It all started simply enough with looking at the patterns behind his images. We started creating mash-ups by using CNNs and Style Transfer to combine the textures and colors of his paintings with one another. It was interesting to see what worked and what didn't, and to figure out what about each painting's imagery became dominant as they were combined.

3d_cn-incest.jpg
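
For readers wondering what the style transfer step looks like in practice, here is a hedged sketch of the classic Gatys-style CNN approach using torchvision's pretrained VGG19. The layer choices and loss weights are common tutorial defaults, not necessarily what was used for these mash-ups.

```python
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 through conv5_1
CONTENT_LAYER = 21                  # conv4_2

def features(x):
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(f):
    # correlation of feature maps captures texture/color, not layout
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content_img, style_img, steps=300):
    """content_img, style_img: 1x3xHxW tensors normalized for VGG."""
    target_style = [gram(f) for f in features(style_img)[0]]
    target_content = features(content_img)[1].detach()
    result = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([result], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        s, c = features(result)
        loss = F.mse_loss(c, target_content)
        loss = loss + 1e5 * sum(F.mse_loss(gram(f), t) for f, t in zip(s, target_style))
        loss.backward()
        opt.step()
    return result.detach()
```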

As cool as these looked, we were both left underwhelmed by the symbolic and emotional aspects of the mash-ups. We felt the art needed to be meaningful. All that was really being combined was color and texture, not symbolism or context. So we thought about it some more, and 3D came up with the idea of trying to use the CNNs to paint portraits of historical figures who made significant contributions to printmaking. A couple of people came to mind as we bounced ideas back and forth before 3D suggested Martin Luther. At first I thought he was talking about Martin Luther King Jr., which left me confused. But when I realized he was talking about the author of The 95 Theses, it made more sense. Not sure if 3D realized I was confused, but I think I played it off well and he didn't suspect anything. We tried applying CNNs to Martin Luther's famous portrait and got the following results.

luther_cnns_rob.jpg

It was nothing all that great, but I made a couple of paintings from it to test things. I also tried to have my robots paint a couple of other new media figures like Mark Zuckerberg.

zuckerberg.jpg

Things still were not gelling though. Good paintings, but nothing great. Then 3D and I decided to try some different approaches. 

I showed him some GANs where I was working on making my robots imagine faces. I showed him how a really neat part of the GAN occurred right at the beginning, when faces emerge from nothing. I also showed him a 5x5 grid of faces that I have come to recognize as a common visualization when implementing GANs in tutorials. We got to talking about how, as a polyptych, it recalled a common Warhol trope, except that there was something different. Warhol was all about mass produced art and how cool repeated images looked next to one another. But these images were even cooler, because they were a new kind of mass production: mass produced imagery made from neural networks, where each image was unique.

widegans.jpg
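
That 5x5 grid is easy to reproduce. Here is a small sketch, assuming a trained generator G that maps 100-dimensional noise vectors to grayscale images; both of those assumptions are mine.

```python
import torch
import matplotlib.pyplot as plt

def show_face_grid(G, n=5, z_dim=100):
    with torch.no_grad():
        faces = G(torch.randn(n * n, z_dim))   # G is a hypothetical trained generator
    fig, axes = plt.subplots(n, n, figsize=(6, 6))
    for face, ax in zip(faces, axes.ravel()):
        ax.imshow(face.squeeze().cpu(), cmap="gray")  # each sample is unique
        ax.axis("off")
    plt.show()
```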

I started having my GANs generate tens of thousands of faces. But I didn't want the faces in too much detail. I liked how they looked before they resolved into clear images. It reminded me of how my own imagination works when I try to picture things in my mind: foggy and nondescript. From there I tested several of 3D's paintings to see which would best render the imagined faces.

cnngan_facialcontext.jpg
gan_3d_models.jpg


3D's Beirut (Column 2) was the most interesting, so I chose that one and put it and the GANs into the process that I have been developing over the past fifteen years. A simplified outline of the artificially creative process it became can be seen in the graphic below. 
 

My robots would begin by having the GAN imagine faces. Then I ran the Viola-Jones face detection algorithm on the GAN images until it detected a face. At that point, right when the general outlines of faces emerged, I stopped the GAN. Then I applied a CNN Style Transfer to the nondescript faces to render them in the style of 3D's Beirut. Then my robots started painting. The brushstroke geometry was taken from my historic database, which contains the strokes of thousands of paintings, including Picassos, Van Goghs, and my own work. Feedback loops refined the image as the robot painted the faces on 11"x14" canvases. All told, dozens of AI algorithms, multiple deep learning neural networks, and feedback loops at all levels started pumping out face after face after face.
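
The stopping rule at the heart of this pipeline is simple enough to sketch. The version below assumes OpenCV's Viola-Jones cascade and a placeholder generate_image function standing in for the GAN sampling step.

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_emerging_face(generate_image, max_steps=5000):
    """generate_image(step) -> HxW uint8 grayscale image sampled from the GAN."""
    for step in range(max_steps):
        img = generate_image(step)
        faces = detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
        if len(faces) > 0:
            return step, img  # stop here: the outline of a face just emerged
    return None
```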

Thirty-two original faces later, it arrived at the following polyptych, which I am calling the First Sparks of Artificial Creativity. The series itself is something I have begun to refer to as Emerging Faces. I have already made an additional eighteen based on a style transfer of my own paintings, and plan to make many more.

ghostfaces_shadow.jpg

facesbigif.gif
beirutcomparison.jpg

Above is the piece in its entirety, as well as an animation of it working on an additional face at an installation in Berlin. You can also see a comparison of 3D's Beirut to some of the faces. An interesting aspect of the artwork is that despite how transformative the faces are from the original painting, the artistic DNA of the original is maintained in those seemingly random red highlights.

It has been a fascinating collaboration to date. Looking forward to working with 3D to further develop many of the ideas we have discussed. Though this explanation may appear to express a lot of artificial creativity, it only goes into his art on a very shallow level. We are always talking and wondering about how much deeper we can actually go.

Pindar

The First Sparks of Artificial Creativity

My robots paint with dozens of AI algorithms all constantly fighting for control. I imagine that our own brains are similar and often think of Minsky's Society of Mind, in which he theorizes that our brains are not one mind, but many, all working with, for, and against each other. This has always been an interesting concept and model for creativity for me. Much of my art is trying to create this mish-mosh of creative capsules all fighting against one another for control of an artificially creative process.

Some of my robots' creative capsules are traditional AI. They use k-means clustering for palette reduction, Viola-Jones for face detection, and Hough lines to help plan stroke paths, among many others. On top of that, there are some algorithms that I have written myself to do things like try to measure beauty and create unique compositions. But the really interesting stuff that I am working with uses neural networks. And the more I use neural networks, the more I see parallels between how these artificial neurons generate images and how my own imagination does.
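
As a concrete example of one of these traditional capsules, here is a minimal sketch of Hough-line stroke planning with OpenCV. The Canny and Hough parameters are illustrative guesses rather than my robots' actual tuning.

```python
import cv2

def propose_stroke_paths(path="source.jpg"):
    """Suggest brushstroke paths along strong edges in the source image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=3.14159 / 180,
                            threshold=60, minLineLength=25, maxLineGap=5)
    # each (x1, y1, x2, y2) segment becomes a candidate brushstroke path
    return [] if lines is None else [tuple(l[0]) for l in lines]
```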

Recently I have seen an interesting similarity between how a specific type of neural network called a Generative Adversarial Network (GAN) imagines unique faces and how my own mind does. Working and experimenting with it, I am coming closer and closer to thinking that this algorithm might just be a part of the initial phases of imagination, the first sparks of creativity. Full disclosure before I go on: I say this as an artist exploring artificial creativity. So please regard any parallels I find as an artist's take on the subject. What exactly is happening in our minds falls under the expertise of a neuroscientist, and modeling what is happening falls in the realm of computational neuroscience, both of which I dabble in, but am by no means an expert in.

Now that I have made clear my level of expertise (or lack thereof), there is an interesting thought experiment I have come up with that helps illustrate the similarities I am seeing between how we imagine faces and how GANs do. For this thought experiment I am going to ask you to imagine a familiar face, then I am going to ask you to get creative and imagine an unfamiliar face. I will then show you how GANs "imagine" faces. You will then be able to compare what went on in your own head with what went on in the artificial neural network and decide for yourself if there are any similarities.


Simple Mental Task - Imagine a Face

So the first simple mental task is to imagine the face of a loved one. Clear your mind and imagine a blank black space. Now pull an image of your loved one out of the darkness until you can picture them in your mind's eye. Take a mental snapshot.


Creative Mental Task - Imagine an Unfamiliar Face

The second task is to do the exact same thing, but by imagining someone you have never seen before.  This is the creative twist. I want you to try to imagine a face you have never seen.  Once again begin by clearing your mind until there is nothing.  Then out of the darkness try to pull up an image of someone you have never seen before. Take a second mental snapshot.

This may have seemed harder, but we do it all the time, such as when we imagine what the characters of a novel might look like, or when we imagine the face of someone we talk to on the phone but have yet to meet. We are somehow generating these images in our mind, though it is not clear how, because it happens so fast.
 

How Neural Nets Imagine Unfamiliar Faces

So now that you have tried to imagine an unfamiliar face, it is neat to see how neural networks try to do this. One of the most interesting methods involves the GANs I have been telling you about. GANs are actually two neural nets competing against one another, in this case to create images of unique faces from nothing. But before I can explain how two neural nets can imagine a face, I probably have to give a quick primer on what exactly a neural net is.

The simplest way to think about an artificial neural network is to compare it to our brain activity.  The following images show actual footage of live neuronal activity in our brain (left) compared to numbers cascading through an artificial neural network (right).

Live Neuronal Activity - courtesy of Michelle Kuykendal & Gareth Guvanasen

Artificial Neural Network

Our brains are a collection of tens of billions of neurons with trillions of synapses. The firing of the neurons seen in the image on the left, and the cascading of electrical impulses between them, is basically responsible for everything we experience, every pattern we notice, and every prediction our brain makes.

The small artificial neural network shown on the right is a mathematical model of this brain activity. To be clear, it is not a model of all brain activity (that is computational neuroscience, and much more complex), but it is a simple model of at least one type of brain activity. This artificial neural network in particular is a small collection of 50 artificial neurons with 268 artificial synapses, where each artificial neuron is a mathematical function and each artificial synapse is a weighted value. These neural nets simulate neuronal activity by sending numbers through the matrix of connections, converting one set of numbers to another. These numbers cascade through the artificial neural net similarly to how electrical impulses cascade through our minds. In the animation on the right, instead of showing the numbers cascading, I have shown the nodes and edges lighting up, and when the numbers are represented like this, one can see the similarities between live neuronal activity and artificial neural networks.
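
A toy network makes the "numbers cascading through weighted connections" idea concrete. The sketch below uses arbitrary sizes of my own choosing, not the 50-neuron, 268-synapse network in the animation.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 25))  # synapses are just weighted values
W2 = rng.normal(size=(25, 5))

def forward(x):
    h = np.tanh(x @ W1)     # each neuron applies a simple function
    return np.tanh(h @ W2)  # activations cascade from layer to layer

print(forward(rng.normal(size=10)))  # 10 numbers in, 5 numbers out
```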

While it may seem abstract to think how this could work, the following graphic shows one of its popular applications. In this convolutional neural network, an image is converted into pixel values; these numbers then enter the artificial neural network on one side, go through a lot of linear algebra, and eventually come out the other side as a classification. In this example, an image of a bridge is identified as a bridge with 65% certainty.

cnnexample.jpg
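
For the curious, here is roughly what that classification step looks like with an off-the-shelf pretrained model. The model choice and image file are illustrative assumptions of mine, and the 65% figure above came from the network in the graphic, not this one.

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()

img = weights.transforms()(Image.open("bridge.jpg")).unsqueeze(0)  # hypothetical photo
with torch.no_grad():
    probs = model(img).softmax(dim=1)      # pixel values in, class scores out
conf, idx = probs.max(dim=1)
print(weights.meta["categories"][idx.item()], f"{conf.item():.0%}")
```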

With this quick neural network primer out of the way, it is now interesting to go into more detail on a face-creating Generative Adversarial Network, which is two opposing neural nets. When these neural nets are configured just right, they can be pretty creative. Furthermore, closely examining how they work, I can't help but wonder if some structure similar to them is in our minds at the very beginning of when we try to imagine unfamiliar faces.

So here is how these adversarial neural nets fight against each other to generate faces from nothing.

The first of the two neural nets is called a Discriminator, and it has been shown thousands of faces and understands the patterns found in typical faces. This neural net would master the first simple mental task I gave you. Just as you could pull the face of a loved one into your imagination, this neural net knows what thousands of faces look like. Perhaps more importantly, however, when shown a new image, it can tell you whether or not that image is a face. This is the Discriminator's most important task in a GAN: it can discriminate between images of faces and images that are not faces, and also give some hints as to why it made that determination.

The second neural net in a GAN is called a Generator. And while the Discriminator knows what thousands of faces look like, the Generator is dumb as a bag of hammers. It doesn't know anything. It begins as a matrix of completely random numbers.

So here they are at the very beginning of the process ready to start imagining faces.

blog01.jpg

The first thing that happens is that the Generator guesses at what a face looks like and asks the Discriminator whether it thinks the image is a face or not. But remember, the Generator is completely naive and filled with random weights, so it begins by creating an image of random junk.

blog02.jpg

When determining whether or not this is a face, the Discriminator is obviously not fooled. The image looks nothing like a face. So it tells the Generator that the image looks nothing like a face, but at the same time gives some hints about how to make its next attempt a little more facelike. This is one of the really important steps. Beyond just telling the Generator that it failed to make a face, the Discriminator is also telling it what parts of the image worked, and the Generator takes this input and changes itself before making the next attempt.
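
In code, those rejections and hints are just gradients. Below is a compact sketch of one standard GAN training step in PyTorch, assuming small generator and discriminator networks G and D, where D ends in a sigmoid so its output is a probability that the input is a real face.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, real_faces, opt_G, opt_D, z_dim=100):
    b = real_faces.size(0)
    z = torch.randn(b, z_dim)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator: learn to call real faces real and the Generator's
    # guesses fake -- this is what lets it "not be fooled"
    opt_D.zero_grad()
    d_loss = (F.binary_cross_entropy(D(real_faces), ones) +
              F.binary_cross_entropy(D(G(z).detach()), zeros))
    d_loss.backward()
    opt_D.step()

    # Generator: the "hints" are gradients flowing back through D, telling
    # G how to adjust its weights so the next attempt looks more facelike
    opt_G.zero_grad()
    g_loss = F.binary_cross_entropy(D(G(z)), ones)
    g_loss.backward()
    opt_G.step()
```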

The Generator adjusts the weights of its neural network, and 120 tries, rejections, and hints from the Discriminator later, it is still just static, but better static...

blogslides03.jpg

But then at attempt 200, ghost images start to emerge out of the darkness...

blogslides04.jpg

and with each guess by the Generator, the images get more and more facelike...

at attempt 400,

600,

1500,

and 4000.

After 4,000 attempts, rejections, and corrections from the Discriminator, the Generator actually gets pretty good at making convincing faces. Here is an animation from the very first attempt to the 4,000th iteration in 10 seconds. Keep in mind that the Generator has never seen or been shown a face.

ganfaces_2_square.gif

So How Does this Compare to How We Imagined Faces?

Early on we did the thought experiment, and I told you that there would be similarities between how this GAN imagined faces and how you did. Well, hopefully the above animation is not how you imagined an unfamiliar face. If it was, well, you are probably a robot. Humans don't think like this, at least I don't.

But let's slow things down and look at what happened with the early guessing (between the Generator's 180th and 400th attempts).

faces_animation_180_to_450.gif

This animation starts with darkness as nondescript faces slowly bubble out of nothing. They merge into one another, never taking on a full identity.

I am not saying that this was the entirety of my creative process. Nor am I saying this is how the human brain generates images, though I am curious what a neuroscientist would think about this. But when I tried to imagine an unfamiliar face, I cleared my mind and an image appeared from nothing. Even though it happens fast and I cannot figure out the mechanisms doing it, it has to start forming from something. This leads me to wonder if a GAN or some similar structure in my mind began by comparing random thoughts in one part of my mind to my memory of how all my friends look in another part. I wonder if from this comparison my brain was able to bring an image out of nothing and into a vague blurry fog, just like in this animation.

I think this is the third time that I am making the disclaimer that I am not a neuroscientist and do not know exactly what is happening in my mind. I wonder if any neuroscientist does, actually. But I do know that our brains, like my painting robots, have many different ways of performing tasks and being creative. GANs are by no means the only way, or even the most important part of artificial creativity, but looking at them as an artist, they are a convincing model for how imagination might be getting its first sparks of inspiration. This model applies to all manner of creative tasks beyond painting. It might be how we first start imagining a new tune, or even come up with a new poem. We start with base knowledge, try to come up with random creative thoughts, compare those to our base knowledge, and adjust as needed over and over again. Isn't this creativity? And if it is, GANs are an interesting model of the very first steps.

I will leave you here with a series of GAN inspired paintings where my robots have painted the ghostlike faces just as they were emerging from the darkness...

allfaces.gif
The First Sparks of Artificial Creativity, 110"x42", Acrylic on Canvas, Pindar Van Arman w/ CloudPainter

 

 

 

Pindar  

Converting Art to Data

There is something gross about breaking a masterpiece down into statistics, but there is also something profoundly beautiful about it. 

Reproduced Cezanne's Houses at L'Estaque with one of my painting robots using a combination of AI and personal collaboration. One of the neat things about using the robot in these recreations is that it saves each and every brush stroke. I can then go back and analyze the statistics behind the recreation. Here are some quick visualizations...

cezanne.jpg
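
The stroke log itself can be very simple. Here is a sketch of the kind of per-stroke record that makes these statistics possible; the field names are my own guesses at what such a log might contain, not the robot's actual schema.

```python
from dataclasses import dataclass
import statistics

@dataclass
class Stroke:
    x1: float          # path start
    y1: float
    x2: float          # path end
    y2: float
    color: tuple       # (r, g, b) of the mixed paint
    length_mm: float

def summarize(strokes):
    print("total strokes:", len(strokes))
    print("mean stroke length (mm):", statistics.mean(s.length_mm for s in strokes))
    print("distinct colors used:", len({s.color for s in strokes}))
```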

It is weird to think of something as emotional as art, as data.  But the more I work with combining the arts with artificial intelligence, the more I am beginning to think that everything is data. 

Below is the finished painting and an animation of each brush stroke.

cezanne_houses_salli.jpg

Can robots be creative? They Probably Already Are...

In this video I demonstrate many of the algorithms and approaches I have programmed into my painting robots in an attempt to give them creative autonomy. I hope to demonstrate that it is no longer a question of whether machines can be creative, but only a debate of whether their creations can be considered art.

So can robots make art?
Probably not.

Can robots be creative?
Probably, and in the artistic discipline of portraiture, they are already very close to human parity.

Pindar Van Arman

Are My Robots Finally Creative?

After twelve years of trying to teach my robots to be more and more creative, I think I have reached a milestone. While I remain the artist of course, my robots no longer need any input from me to create unique original portraits. 

I will be releasing a short video with details shortly, but as can be seen in the slide above from a recent presentation, my robots can "imagine" faces with GANs, "imagine" a style with CANs, then paint the imagined face in the imagined style using CNNs, all the while evaluating their own work and progress with Feedback Loops. Furthermore, the Feedback Loops can use more CNNs to understand context from the robots' historic database, as well as find images in their own work, and adjust the painting on both a micro and macro level.
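
At a high level, the whole loop in that slide reduces to a few stages. Here is a stub-level sketch where every function name is a placeholder for the networks described above, not actual CloudPainter code.

```python
def paint_autonomous_portrait(gan, can, style_transfer, robot, camera):
    face = gan.imagine_face()               # GAN: invent a face from nothing
    style = can.imagine_style()             # CAN: invent a style
    target = style_transfer(face, style)    # CNN: render the face in the style
    while not robot.done(target):
        canvas = camera.capture()           # feedback loop: look at the canvas
        robot.paint_stroke(target, canvas)  # reduce the difference, stroke by stroke
```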

This process is so similar to how I paint portraiture, that I am beginning to question if there is any real difference between human and computational creativity. Is it art? No. But it is creative.

 

HBO Vice Piece on CloudPainter - The da Vinci Coder

Typically the puns applied to artistic robots make me cringe, but I actually liked HBO Vice's name for their segment on CloudPainter. They called me The Da Vinci Coder.

Spent the day with them a couple of weeks ago and really enjoyed their treatment of what I am trying to do with my art. Not sure how you can access HBO Vice without HBO, but if you can, it is a good description of where the state of the art in artificial creativity is. If you can't, here are some stills from the episode and a brief description...

Hunter and I working on setting up a painting...

Screen Shot 2017-08-03 at 8.23.17 PM.png

One of my robots working on a portrait...

Elle asking some questions...

Cool shot of my paint covered hands...

One of my robots working on a portrait of Elle...

... and me walking Elle through some of the many algorithms, both borrowed and invented, that I use to get from a photograph of her to a finished stylized portrait below.

Robot Art 2017 - Top Technical Contributor

CloudPainter used deep learning, various open source AI, and some of our own custom algorithms to create 12 paintings for the 2017 Robot Art Contest. The robot and its software were awarded the Top Technical Contribution Award, while the artwork it produced received 3rd place in the aesthetic competition. You can see the other winners and competitors at www.robotart.org.

Below are some of the portraits we submitted.  

Portrait of Hank

Portrait of Corinne

Portrait of Hunter

We chose to go an abstract route in this year's competition by concentrating on computational abstraction. But not random abstraction. Each image began with a photoshoot, after which CloudPainter's algorithms would pick a favorite photo, create a balanced composition from it, and use Deep Learning to apply purposeful abstraction. The abstraction was not random but based on an attempt to learn from the abstraction of existing pieces of art, whether from a famous piece or from a painting by one of my children.

Full description of all the individual steps can be seen in the following video.

 

 

NVIDIA GTC 2017 Features CloudPainter's Deep Learning Portrait Algorithms

CloudPainter was recently featured in NVIDIA's GTC 2017 Keynote. As deep learning finds its way into more and more applications, this video highlights some of the more interesting ones. Our ten seconds come around 100 seconds in, but I suggest watching the whole thing to see where the current state of the art in artificial intelligence stands.